185 research outputs found

    A machine learning driven solution to the problem of perceptual video quality metrics

    The advent of high-speed internet connections, advanced video coding algorithms, and consumer-grade computers with high computational capabilities has led video streaming over the internet to make up the majority of network traffic. This effect has led to a continuously expanding video streaming industry that seeks to offer enhanced quality-of-experience (QoE) to its users at the lowest cost possible. Video streaming services are now able to adapt to the hardware and network restrictions that each user faces and thus provide the best experience possible under those restrictions. The most common way to adapt to network bandwidth restrictions is to offer a video stream at the highest possible visual quality for the maximum achievable bitrate under the network connection in use. This is achieved by storing various pre-encoded versions of the video content with different bitrate and visual quality settings. Visual quality is measured by means of objective quality metrics, such as the Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), Visual Information Fidelity (VIF), and others, which can be easily computed analytically. Nevertheless, it is widely accepted that although these metrics provide an accurate estimate of the statistical quality degradation, they do not accurately reflect the viewer’s perception of visual quality. As a result, the acquisition of user ratings in the form of Mean Opinion Scores (MOS) remains the most accurate depiction of human-perceived video quality, albeit very costly and time consuming, and thus cannot be practically employed by video streaming providers that have hundreds or thousands of videos in their catalogues. A recent, very promising approach for addressing this limitation is the use of machine learning techniques to train models that represent human video quality perception more accurately.
    To this end, regression techniques are used to map objective quality metrics to human video quality ratings acquired for a large number of diverse video sequences. Results have been very promising, with approaches such as the Video Multimethod Assessment Fusion (VMAF) metric achieving higher correlations with user-acquired MOS ratings than traditional widely used objective quality metrics.
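    As a toy illustration of the regression step described above, the sketch below fits a least-squares line mapping a single objective quality metric to MOS ratings. The actual VMAF metric fuses several elementary metrics with a support vector regressor, so this single-feature linear fit is only a simplified, hypothetical stand-in.

```python
def fit_linear(metric_scores, mos_ratings):
    # Least-squares fit of MOS ≈ slope * metric + intercept,
    # a toy stand-in for the metric-to-MOS regression step.
    n = len(metric_scores)
    mx = sum(metric_scores) / n
    my = sum(mos_ratings) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(metric_scores, mos_ratings))
    var = sum((x - mx) ** 2 for x in metric_scores)
    slope = cov / var
    intercept = my - slope * mx
    return slope, intercept
```

Once fitted on a corpus of rated sequences, the model predicts a perceptual score for an unrated video from its objective metric value alone.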

    Security challenges in cyber systems


    Relationship between Awareness and Use of Digital Information Resources among University Students of Southern Punjab

    The purpose of this study was to determine the relationship between awareness and use of digital information resources among students of universities in South Punjab. The researcher adopted a survey research design and employed a questionnaire to collect data from respondents. The population of the study consisted of students of BS, Master, MS, M.Phil and PhD programmes at The Islamia University of Bahawalpur and Bahauddin Zakariya University Multan. A convenience sampling technique was used to collect data from respondents. The Statistical Package for the Social Sciences (SPSS-20) was used for the analysis of the data. The study found that the majority of respondents used the university library occasionally, and that most respondents preferred digital information resources over printed materials. Most respondents agreed that they consult digital information resources for academic work, assignments and research purposes. The majority of respondents had the skills to use databases such as the HEC Digital Library, HEC Summon, Science Direct and JSTOR. Statistically, a strong relationship was found between awareness and use of digital information resources. The barriers faced by respondents while using digital information resources were slow internet connectivity, low ICT skills and limited access to university information resources.

    Matching pursuit-based compressive sensing in a wearable biomedical accelerometer fall diagnosis device

    There is a significant high-fall-risk population in which individuals are susceptible to frequent falls and significant injury, and for whom quick medical response and fall information are critical to providing efficient aid. This article presents an evaluation of compressive sensing techniques in an accelerometer-based intelligent fall detection system modelled on a wearable Shimmer biomedical embedded computing device with Matlab. The presented fall detection system utilises a database of fall and activities-of-daily-living signals evaluated with discrete wavelet transforms and principal component analysis to obtain binary tree classifiers for fall evaluation. Fourteen test subjects undertook various fall and activities-of-daily-living experiments with a Shimmer device to generate data for principal component analysis-based fall classifiers and to evaluate the proposed fall analysis system. The presented system obtains highly accurate fall detection results, demonstrating significant advantages in comparison with the thresholding method presented. Additionally, the presented approach offers advantageous fall diagnostic information. Furthermore, transmitted data accounts for over 80% of the battery current usage of the Shimmer device, hence it is critical that the acceleration data is reduced to increase transmission efficiency and in turn improve battery usage performance. Various matching pursuit-based compressive sensing techniques have been utilised to significantly reduce the acceleration information required for transmission.
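    A minimal sketch of the matching pursuit idea underlying the compression step above: the signal is greedily decomposed over a dictionary of unit-norm atoms, so only a few large coefficients need to be transmitted. The dictionary and iteration count below are illustrative, not those of the presented system.

```python
def matching_pursuit(signal, atoms, n_iter):
    # Greedy sparse decomposition: at each step, pick the unit-norm
    # atom most correlated with the residual and subtract its projection.
    residual = list(signal)
    coeffs = [0.0] * len(atoms)
    for _ in range(n_iter):
        dots = [sum(r * a for r, a in zip(residual, atom)) for atom in atoms]
        k = max(range(len(atoms)), key=lambda i: abs(dots[i]))
        coeffs[k] += dots[k]
        residual = [r - dots[k] * a for r, a in zip(residual, atoms[k])]
    return coeffs, residual
```

With an overcomplete dictionary, a few iterations typically capture most of the signal energy, which is what makes the transmitted representation compact.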

    Non invasive skin hydration level detection using machine learning

    Dehydration and overhydration can both have serious medical implications for health. Therefore, it is vital to track the hydration level (HL), specifically in children, the elderly and patients with underlying medical conditions such as diabetes. Most current approaches to estimating the hydration level are not sufficient and require more in-depth research. Therefore, in this paper, we used a non-invasive wearable sensor to collect skin conductance data and employed different machine learning algorithms based on feature engineering to predict the hydration level of the human body in different body postures. The comparative experimental results demonstrated that the random forest, with an accuracy of 91.3%, achieved better performance than the other machine learning algorithms at predicting the hydration state of the human body. This study paves the way for further investigation into non-invasive proactive skin hydration detection, which can help in the diagnosis of serious health conditions.
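    A small sketch of the kind of feature engineering such a pipeline relies on: raw skin-conductance samples are split into fixed-size windows and summarised into per-window features before being fed to a classifier such as a random forest. The window size and the specific features (mean and average slope) are hypothetical choices, not those reported by the authors.

```python
def window_features(samples, window=4):
    # Summarise a skin-conductance trace into per-window features:
    # (mean level, average first-difference slope) for each window.
    feats = []
    for start in range(0, len(samples) - window + 1, window):
        w = samples[start:start + window]
        mean = sum(w) / window
        slope = (w[-1] - w[0]) / (window - 1)
        feats.append((mean, slope))
    return feats
```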

    ECG classification using an optimal temporal convolutional network for remote health monitoring

    Increased life expectancy in most countries is a result of continuous improvements at all levels, from medicine and public health services to environmental and personal hygiene and the use of the most advanced technologies by healthcare providers. Despite these significant improvements, especially at the technological level in the last few decades, overall access to healthcare services and medical facilities worldwide is not equally distributed. Indeed, the end beneficiaries of the most advanced healthcare services and technologies on a daily basis are mostly residents of big cities, whereas residents of rural areas, even in developed countries, have major difficulties accessing even basic medical services. This may lead to huge deficiencies in timely medical advice and assistance and may even cause death in some cases. Remote healthcare is considered a serious candidate for facilitating access to health services for all, using the most advanced technologies while providing high-quality diagnosis together with ease of implementation and use. ECG analysis and related cardiac diagnosis techniques are basic healthcare methods that provide rapid insights into potential health issues through simple visualization and interpretation by clinicians, or through automatic detection of potential cardiac anomalies. In this paper, we propose a novel machine learning (ML) architecture for ECG classification covering five heart diseases, based on temporal convolution networks (TCN). The proposed design, which applies a dilated causal one-dimensional convolution to the input heartbeat signals, appears to outperform existing ML methods, with an accuracy of 96.12% and an F1 score of 84.13%, using a reduced number of parameters (10.2 K). Such results make the proposed TCN architecture a good candidate for low-power hardware platforms, and thus for potential use in low-cost embedded devices for remote health monitoring.
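    The core building block of the TCN described above is a dilated causal 1-D convolution: each output sample depends only on current and past inputs, and the dilation factor widens the receptive field without adding parameters. A minimal pure-Python sketch, with illustrative weights and dilation rather than the trained values:

```python
def causal_dilated_conv1d(x, kernel, dilation=1):
    # y[t] = sum_k kernel[k] * x[t - k*dilation], with implicit
    # left zero-padding so the output never looks into the future.
    out = []
    for t in range(len(x)):
        acc = 0.0
        for k, w in enumerate(kernel):
            i = t - k * dilation
            if i >= 0:
                acc += w * x[i]
        out.append(acc)
    return out
```

Stacking such layers with dilations 1, 2, 4, … is what lets a TCN cover a long heartbeat window with few parameters.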

    BED: A new dataset for EEG-based biometrics

    Various recent research works have focused on the use of electroencephalography (EEG) signals in the field of biometrics. However, advances in this area have somehow been limited by the absence of a common testbed that would make it possible to easily compare the performance of different proposals. In this work, we present a dataset that has been specifically designed to allow researchers to attempt new biometric approaches that use EEG signals captured by using relatively inexpensive consumer-grade devices. The proposed dataset has been made publicly accessible and can be downloaded from https://doi.org/10.5281/zenodo.4309471. It contains EEG recordings and responses from 21 individuals, captured under 12 different stimuli across three sessions. The selected stimuli included traditional approaches, as well as stimuli that aim to elicit concrete affective states, in order to facilitate future studies related to the influence of emotions on the EEG signals in the context of biometrics. The captured data were checked for consistency and a performance study was also carried out in order to establish a baseline for the tasks of subject verification and identification
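    As a hypothetical baseline for the identification task such a dataset enables (not the evaluation protocol used by the authors), the sketch below assigns a probe EEG feature vector to the enrolled subject whose template is nearest by squared Euclidean distance:

```python
def identify(probe, enrolled):
    # Nearest-template identification: enrolled maps subject id -> feature
    # vector; return the id whose template is closest to the probe.
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(enrolled, key=lambda sid: sq_dist(probe, enrolled[sid]))
```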

    Automated Detection of Substance-Use Status and Related Information from Clinical Text

    This study aims to develop and evaluate an automated system for extracting information related to patient substance use (smoking, alcohol, and drugs) from unstructured clinical text (medical discharge records). The authors propose a four-stage system for the extraction of the substance-use status and related attributes (type, frequency, amount, quit-time, and period). The first stage uses a keyword search technique to detect sentences related to substance use and to exclude unrelated records. In the second stage, an extension of the NegEx negation detection algorithm is developed and employed for detecting the negated records. The third stage involves identifying the temporal status of the substance use by applying windowing and chunking methodologies. Finally, in the fourth stage, regular expressions, syntactic patterns, and keyword search techniques are used in order to extract the substance-use attributes. The proposed system achieves an F1-score of up to 0.99 for identifying substance-use-related records, 0.98 for detecting the negation status, and 0.94 for identifying temporal status. Moreover, F1-scores of up to 0.98, 0.98, 1.00, 0.92, and 0.98 are achieved for the extraction of the amount, frequency, type, quit-time, and period attributes, respectively. Natural Language Processing (NLP) and rule-based techniques are employed efficiently for extracting substance-use status and attributes, with the proposed system being able to detect substance-use status and attributes over both sentence-level and document-level data. Results show that the proposed system outperforms the compared state-of-the-art substance-use identification system on an unseen dataset, demonstrating its generalisability
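    The first two stages of the pipeline above (keyword filtering, then negation detection) can be sketched as follows. The keyword and negation patterns are hypothetical simplifications, and the real system extends the NegEx algorithm rather than using a single regular expression:

```python
import re

# Illustrative patterns only; the actual system uses richer keyword lists
# and an extended NegEx algorithm with scope handling.
USE_KEYWORDS = re.compile(r"\b(smok\w*|alcohol|tobacco|drug\w*)\b", re.I)
NEG_PATTERNS = re.compile(r"\b(denies|no history of|does not|never)\b", re.I)

def substance_use_status(sentence):
    # Stage 1: keyword search to exclude unrelated sentences.
    if not USE_KEYWORDS.search(sentence):
        return "unrelated"
    # Stage 2: NegEx-style cue check to flag negated mentions.
    if NEG_PATTERNS.search(sentence):
        return "negated"
    return "affirmed"
```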